
    Hybrid brain/neural interface and autonomous vision-guided whole-arm exoskeleton control to perform activities of daily living (ADLs)

    Background: The aging of the population and the progressive increase of life expectancy in developed countries are leading to a high incidence of age-related cerebrovascular diseases, which affect people's motor and cognitive capabilities and may result in the loss of arm and hand function. Such conditions have a detrimental impact on people's quality of life. Assistive robots have been developed to help people with motor or cognitive disabilities perform activities of daily living (ADLs) independently. Most of the robotic systems for assisting with ADLs proposed in the state of the art are external manipulators and exoskeletal devices. The main objective of this study is to compare the performance of a hybrid EEG/EOG interface for performing ADLs when the user controls an exoskeleton rather than an external manipulator.

    Methods: Ten impaired participants (5 males and 5 females, mean age 52 ± 16 years) were instructed to use both systems to perform a drinking task and a pouring task comprising multiple subtasks. For each device, two modes of operation were studied: a synchronous mode (the user received a visual cue indicating the subtask to be performed at each time) and an asynchronous mode (the user started and finished each subtask independently). Fluent control was assumed when the time for successful initializations remained below 3 s, and reliable control when it remained below 5 s. The NASA-TLX questionnaire was used to evaluate task workload. For the trials involving the exoskeleton, a custom Likert-scale questionnaire was used to evaluate the user's experience in terms of perceived comfort, safety, and reliability.

    Results: All participants were able to control both systems fluently and reliably. However, the results suggest better performance of the exoskeleton over the external manipulator (75% of successful initializations remained below 3 s for the exoskeleton and below 5 s for the external manipulator).

    Conclusions: Although the results of our study in terms of fluency and reliability of EEG control suggest better performance of the exoskeleton over the external manipulator, such results cannot be considered conclusive, due to the heterogeneity of the population under test and the relatively limited number of participants.

    This study was funded by the European Commission under the project AIDE (G.A. no. 645322), by the Spanish Ministry of Science and Innovation through the projects PID2019-108310RB-I00 and PLEC2022-009424, and by the Ministry of Universities and the European Union ("financed by European Union-Next Generation EU") through a Margarita Salas grant for the training of young doctors.

    Catalán, JM.; Trigili, E.; Nann, M.; Blanco-Ivorra, A.; Lauretti, C.; Cordella, F.; Ivorra, E.; ... (2023). Hybrid brain/neural interface and autonomous vision-guided whole-arm exoskeleton control to perform activities of daily living (ADLs). Journal of NeuroEngineering and Rehabilitation, 20(1):1-16. https://doi.org/10.1186/s12984-023-01185-w
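    The fluency and reliability criteria above reduce to a simple computation over per-subtask initialization times. The sketch below (Python; the function and variable names are illustrative, not from the paper) classifies a participant's initialization times against the 3 s and 5 s thresholds.

```python
# Minimal sketch: fraction of successful initializations under the fluency
# (3 s) and reliability (5 s) thresholds described in the study.
FLUENT_THRESHOLD_S = 3.0    # fluent control: initialization time < 3 s
RELIABLE_THRESHOLD_S = 5.0  # reliable control: initialization time < 5 s

def initialization_stats(init_times_s: list[float]) -> dict:
    """Return the fraction of initializations falling under each threshold."""
    n = len(init_times_s)
    if n == 0:
        raise ValueError("no initialization times recorded")
    fluent = sum(t < FLUENT_THRESHOLD_S for t in init_times_s) / n
    reliable = sum(t < RELIABLE_THRESHOLD_S for t in init_times_s) / n
    return {"fluent_fraction": fluent, "reliable_fraction": reliable}

# Hypothetical initialization times (s) for one drinking trial.
times = [1.8, 2.4, 2.9, 4.1, 2.2, 3.6]
stats = initialization_stats(times)
print(f"fluent: {stats['fluent_fraction']:.0%}, reliable: {stats['reliable_fraction']:.0%}")
```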

    The WGD—A Dataset of Assembly Line Working Gestures for Ergonomic Analysis and Work-Related Injuries Prevention

    This paper stresses the importance of human movement monitoring for preventing musculoskeletal disorders by proposing the WGD (Working Gesture Dataset), a publicly available dataset of assembly line working gestures intended for workers' kinematic analysis. It contains kinematic data acquired from healthy subjects performing assembly line working activities, recorded with an optoelectronic motion capture system. The acquired data were used to extract quantitative indicators that assess how the working tasks were performed and to detect information useful for estimating exposure to the factors that may contribute to the onset of musculoskeletal disorders. The obtained results demonstrate that the proposed indicators can be exploited to detect incorrect gestures and postures early and, consequently, to prevent work-related disorders. The approach is general and independent of the adopted motion analysis system, and it provides indications for safely performing working activities. For example, the proposed WGD, through the extracted indicators and thresholds, can also be used to evaluate the kinematics of workers in real working environments thanks to the adoption of unobtrusive measuring systems such as wearable sensors.
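    To illustrate the kind of indicator the abstract describes, the sketch below computes a simple posture-exposure measure: the fraction of a work cycle during which a joint angle exceeds an ergonomic threshold. The signal, threshold, and sampling rate are assumptions for the example, not values taken from the WGD.

```python
import numpy as np

def exposure_fraction(angles_deg: np.ndarray, threshold_deg: float) -> float:
    """Fraction of samples in which the joint angle exceeds the threshold."""
    return float(np.mean(angles_deg > threshold_deg))

# Hypothetical shoulder-flexion signal sampled at 100 Hz over one assembly cycle.
rng = np.random.default_rng(0)
shoulder_flexion = 45 + 30 * np.sin(np.linspace(0, 2 * np.pi, 500)) + rng.normal(0, 2, 500)

# Share of the cycle spent in an assumed awkward-posture zone (> 60 deg).
print(f"time above 60 deg: {exposure_fraction(shoulder_flexion, 60.0):.0%}")
```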

    Development and Validation of a System for the Assessment and Recovery of Grip Force Control

    The ability to finely control hand grip forces can be compromised by neuromuscular or musculoskeletal disorders; it is therefore recommended to include the training and assessment of grip force control in rehabilitation therapy. The benefits of robot-mediated therapy have been widely reported in the literature, and its combination with virtual reality and biofeedback can improve rehabilitation outcomes. However, existing systems for hand rehabilitation do not allow both monitoring/training the forces exerted by individual fingers and providing biofeedback. This paper describes the development of a system for the assessment and recovery of grip force control. An exoskeleton for hand rehabilitation was instrumented to sense grip forces at the fingertips, and two operation modalities are proposed: (i) active-assisted training that helps the user reach target force values and (ii) virtual reality games, in the form of tracking tasks, to train and assess the user's grip force control. For the active-assisted modality, control of the exoskeleton motors generated additional grip force at the fingertips, confirming the feasibility of this modality. The developed virtual reality games were positively received by the volunteers and allowed evaluating the performance of both healthy and pathological users.
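    The active-assisted modality suggests a straightforward control idea: drive the motors in proportion to the gap between the target force and the force measured at the fingertip. The sketch below is one plausible realization under that assumption; the gains, saturation limits, and names are illustrative and not taken from the paper.

```python
def assist_command(target_force_n: float, measured_force_n: float,
                   gain: float = 0.5, max_cmd: float = 1.0) -> float:
    """Proportional assistance: push the finger toward the target force,
    saturating the command at the assumed actuator limits."""
    error = target_force_n - measured_force_n
    cmd = gain * error
    return max(-max_cmd, min(max_cmd, cmd))  # clamp to [-max_cmd, max_cmd]

# One control-loop step: the user exerts 2.0 N while the target is 5.0 N.
print(assist_command(target_force_n=5.0, measured_force_n=2.0))  # -> 1.0 (saturated)
```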

    Restoring Activities of Daily Living Using an EEG/EOG-Controlled Semiautonomous and Mobile Whole-Arm Exoskeleton in Chronic Stroke

    Stroke survivors with chronic paralysis often have difficulty performing various activities of daily living (ADLs), such as preparing a meal or eating and drinking independently. Recently, it was shown that a brain/neural hand exoskeleton can restore hand and finger function, but many stroke survivors suffer from motor deficits affecting their whole upper limb. Therefore, novel hybrid electroencephalography/electrooculography (EEG/EOG)-based brain/neural control paradigms were developed for guiding a whole-arm exoskeleton. It was unclear, however, whether hemiplegic stroke survivors are able to reliably use such a brain/neural-controlled device. Here, we tested the feasibility, safety, and user-friendliness of EEG/EOG-based brain/neural robotic control across five hemiplegic stroke survivors engaging in a drinking task that consisted of several subtasks (e.g., reaching, grasping, manipulating, and drinking). Reliability was assumed when at least 75% of subtasks were initialized within 3 s. Fluent control was assumed if the average "time to initialize" each subtask remained below 3 s. The system's safety and user-friendliness were rated using Likert scales. All chronic stroke patients were able to operate the system reliably and fluently. No undesired side effects were reported. Four participants rated the system as very user-friendly. These results show that chronic stroke survivors are capable of using an EEG/EOG-controlled semiautonomous whole-arm exoskeleton to restore ADLs.
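    The abstract does not spell out the control logic, but work in this line commonly uses EEG motor imagery to start a subtask and a large horizontal eye movement (EOG) as a veto that interrupts the robot. The sketch below is a hedged state-machine reading of that paradigm; all thresholds and feature names are assumptions, not the authors' values.

```python
from enum import Enum, auto

class ExoState(Enum):
    IDLE = auto()
    EXECUTING = auto()

def step(state: ExoState, smr_erd: float, eog_amplitude_uv: float,
         erd_threshold: float = 0.3, eog_threshold_uv: float = 200.0) -> ExoState:
    """One decision step: start on motor imagery, stop on a strong eye movement."""
    if eog_amplitude_uv > eog_threshold_uv:
        return ExoState.IDLE          # EOG veto: interrupt the current subtask
    if state is ExoState.IDLE and smr_erd > erd_threshold:
        return ExoState.EXECUTING     # motor-imagery-related SMR desynchronization
    return state

state = ExoState.IDLE
state = step(state, smr_erd=0.45, eog_amplitude_uv=30.0)   # -> EXECUTING
state = step(state, smr_erd=0.10, eog_amplitude_uv=350.0)  # -> IDLE (vetoed)
print(state)
```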

    Learning by demonstration for motion planning of upper-limb exoskeletons

    The reference joint positions of upper-limb exoskeletons are typically obtained by means of Cartesian motion planners and inverse kinematics algorithms based on the inverse Jacobian. This approach exploits the available degrees of freedom (DoFs) of the robot kinematic chain to achieve the desired end-effector pose; however, when used to operate non-redundant exoskeletons, it does not ensure that anthropomorphic criteria are satisfied throughout the human-robot workspace. This paper proposes a motion planning system, based on Learning by Demonstration, for upper-limb exoskeletons that successfully assists patients during Activities of Daily Living (ADLs) in unstructured environments while ensuring that anthropomorphic criteria are satisfied throughout the human-robot workspace. The motion planning system combines Learning by Demonstration with the computation of Dynamic Motion Primitives and machine learning techniques to construct task- and patient-specific joint trajectories based on the learnt trajectories. System validation was carried out in simulation and in a real setting with a 4-DoF upper-limb exoskeleton, a 5-DoF wrist-hand exoskeleton, and four patients with Limb Girdle Muscular Dystrophy. Validation aimed to (i) compare the performance of the proposed motion planning with traditional methods and (ii) assess the generalization capabilities of the proposed method with respect to environment variability. Three ADLs were chosen to validate the system: drinking, pouring, and lifting a light sphere. The achieved results showed a 100% success rate in task fulfillment, with a high level of generalization with respect to environment variability. Moreover, an anthropomorphic configuration of the exoskeleton was always ensured.
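    To make the core building block concrete, here is a minimal one-DoF discrete dynamic movement primitive (DMP) of the kind the planner computes: it learns a forcing term from a single demonstrated trajectory and can then replay the motion toward a new goal. All gains, basis counts, and the demonstration itself are assumptions for the sketch, not the paper's settings.

```python
import numpy as np

class DMP1D:
    """One-DoF discrete dynamic movement primitive (illustrative sketch)."""

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_s=4.0):
        self.alpha, self.beta, self.alpha_s = alpha, beta, alpha_s
        self.c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))  # basis centers in phase s
        self.h = n_basis ** 1.5 / self.c                        # basis widths (heuristic)
        self.w = np.zeros(n_basis)

    def _psi(self, s):
        return np.exp(-self.h * (s - self.c) ** 2)

    def fit(self, y, dt):
        """Learn forcing-term weights from one demonstrated trajectory y(t)."""
        tau = (len(y) - 1) * dt
        dy = np.gradient(y, dt)
        ddy = np.gradient(dy, dt)
        self.y0, self.g = y[0], y[-1]
        s = np.exp(-self.alpha_s * np.arange(len(y)) * dt / tau)
        f_target = tau**2 * ddy - self.alpha * (self.beta * (self.g - y) - tau * dy)
        xi = s * (self.g - self.y0)                  # amplitude-scaled phase
        psi = np.stack([self._psi(si) for si in s])  # shape (T, n_basis)
        num = (psi * (xi * f_target)[:, None]).sum(axis=0)
        den = (psi * (xi ** 2)[:, None]).sum(axis=0) + 1e-10
        self.w = num / den                           # locally weighted regression

    def rollout(self, tau, dt, g=None):
        """Integrate the DMP, optionally toward a new goal g."""
        g = self.g if g is None else g
        y, v, s, traj = self.y0, 0.0, 1.0, []
        for _ in range(int(round(tau / dt))):
            psi = self._psi(s)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * s * (g - self.y0)
            v += dt * (self.alpha * (self.beta * (g - y) - v) + f) / tau
            y += dt * v / tau
            s += dt * (-self.alpha_s * s) / tau
            traj.append(y)
        return np.array(traj)

# Learn a minimum-jerk-like reach from 0 to 1, then replay it toward a new goal.
dt = 0.01
t = np.linspace(0.0, 1.0, 101)
demo = 10 * t**3 - 15 * t**4 + 6 * t**5   # minimum-jerk profile, 0 -> 1
dmp = DMP1D()
dmp.fit(demo, dt)
print(dmp.rollout(tau=1.0, dt=dt, g=0.8)[-1])  # ends near the new goal 0.8
```

    Generalization to new goals falls out of the formulation: the forcing term is scaled by (g - y0), so the learnt shape is preserved while the endpoint moves, which is one reason DMPs suit the environment-variability tests described above.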